Europe Is Bending the Knee to the US on Tech Policy

WIRED

The Trump administration's pressure on European regulators is having an impact: restrictions on Big Tech are being loosened and planned measures canceled. Almost everything is on hiatus. The EU AI Act, Digital Services Act, and Digital Markets Act are all at risk. The European Commission is preparing to end the year with virtually no movement on its most important tech policy initiatives, and many measures may even be reversed.


EU sets 2027 target for anti-drone system to defend against Russia

BBC News

EU foreign policy chief Kaja Kallas has said a new anti-drone system should be fully operational by the end of 2027, as part of a drive to toughen defences against Russia and be fully prepared for possible conflict by 2030. Drones are already redefining warfare. Having drone defences is no longer optional for anyone, Kallas said, referring to Russia's ongoing war in Ukraine and fears that Moscow may attack the EU. The European Commission's defence roadmap also proposes strengthening the EU's eastern borders and building air and space shields. Several EU nations have faced Russian incursions into their airspace and US President Donald Trump has urged the bloc to do more to defend itself.


The Sandbox Configurator: A Framework to Support Technical Assessment in AI Regulatory Sandboxes

Buscemi, Alessio, Simonetto, Thibault, Pagani, Daniele, Castignani, German, Cordy, Maxime, Cabot, Jordi

arXiv.org Artificial Intelligence

The systematic assessment of AI systems is increasingly vital as these technologies enter high-stakes domains. To address this, the EU's Artificial Intelligence Act introduces AI Regulatory Sandboxes (AIRS): supervised environments where AI systems can be tested under the oversight of Competent Authorities (CAs), balancing innovation with compliance, particularly for startups and SMEs. Yet significant challenges remain: assessment methods are fragmented, tests lack standardisation, and feedback loops between developers and regulators are weak. To bridge these gaps, we propose the Sandbox Configurator, a modular open-source framework that enables users to select domain-relevant tests from a shared library and generate customised sandbox environments with integrated dashboards. Its plug-in architecture aims to support both open and proprietary modules, fostering a shared ecosystem of interoperable AI assessment services. The framework aims to address multiple stakeholders: CAs gain structured workflows for applying legal obligations; technical experts can integrate robust evaluation methods; and AI providers access a transparent pathway to compliance. By promoting cross-border collaboration and standardisation, the Sandbox Configurator's goal is to support a scalable and innovation-friendly European infrastructure for trustworthy AI governance.
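The plug-in architecture described in the abstract can be illustrated with a minimal registry sketch. All names here (`register_test`, `build_sandbox`, the example test) are hypothetical stand-ins, not the Sandbox Configurator's actual API:

```python
# Hypothetical sketch of a plug-in test library for an AI
# regulatory sandbox; none of these names are the real framework's.

TEST_LIBRARY = {}

def register_test(name, domains):
    """Decorator that adds an assessment test to the shared library."""
    def wrap(fn):
        TEST_LIBRARY[name] = {"run": fn, "domains": set(domains)}
        return fn
    return wrap

@register_test("robustness_noise", domains={"vision", "nlp"})
def robustness_noise(model):
    # Placeholder body: a real test would perturb inputs and
    # measure the accuracy drop of `model`.
    return {"passed": True, "score": 0.93}

def build_sandbox(domain):
    """Select only the domain-relevant tests, as a Competent
    Authority configuring a sandbox might."""
    return {name: t["run"] for name, t in TEST_LIBRARY.items()
            if domain in t["domains"]}

sandbox = build_sandbox("nlp")
results = {name: run(model=None) for name, run in sandbox.items()}
```

The registry pattern is what makes open and proprietary modules interchangeable: a proprietary test only needs to register itself under the same interface.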


Russia expanding chemical weapons use in Ukraine, say European spy agencies

Al Jazeera

Russia has intensified its use of chemical weapons against Ukrainian soldiers in a serious violation of international law, the Dutch and German intelligence agencies have said. On Friday, they said there was extensive evidence that Moscow's forces were using banned products, including the choking agent chloropicrin. Russia denies using the prohibited weapons, as does Ukraine. On Wednesday, Maria Zakharova, the spokesperson for the Russian foreign ministry, claimed that the Federal Security Service found a cache of Ukrainian weapons in the east of the country containing chloropicrin. "It is normalised and widespread. Chloropicrin is dropped by drones to drive soldiers out of trenches, and then kill them," Dutch Defence Minister Ruben Brekelmans said in a post on X. Brekelmans, who is now calling for tougher sanctions against Russia, described the use of chemical weapons as "horrible and unacceptable".


Assessing the Performance Gap Between Lexical and Semantic Models for Information Retrieval With Formulaic Legal Language

Mori, Larissa, de Oliveira, Carlos Sousa, Yih, Yuehwern, Ventresca, Mario

arXiv.org Artificial Intelligence

Legal passage retrieval is an important task that assists legal practitioners in the time-intensive process of finding relevant precedents to support legal arguments. This study investigates the task of retrieving legal passages or paragraphs from decisions of the Court of Justice of the European Union (CJEU), whose language is highly structured and formulaic, leading to repetitive patterns. Understanding when lexical or semantic models are more effective at handling the repetitive nature of legal language is key to developing retrieval systems that are more accurate, efficient, and transparent for specific legal domains. To this end, we explore when this routinized legal language is better suited for retrieval using methods that rely on lexical and statistical features, such as BM25, or dense retrieval models trained to capture semantic and contextual information. A qualitative and quantitative analysis with three complementary metrics shows that both lexical and dense models perform well in scenarios with more repetitive usage of language, whereas BM25 performs better than the dense models in more nuanced scenarios where repetition and verbatim quotes are less prevalent, and in longer queries. Our experiments also show that BM25 is a strong baseline, surpassing off-the-shelf dense models in 4 out of 7 performance metrics. However, fine-tuning a dense model on domain-specific data led to improved performance, surpassing BM25 in most metrics, and we analyze the effect of the amount of data used in fine-tuning on the model's performance and temporal robustness. The code, dataset and appendix related to this work are available at: https://github.com/larimo/lexsem-legal-ir.
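The lexical baseline in the study, BM25, scores a document by combining term-frequency saturation with inverse document frequency. A compact self-contained sketch of the standard BM25 formula (not the authors' code) shows why exact term overlap drives its rankings:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document for the query with the classic BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter()                     # document frequency per term
    for d in tokenized:
        df.update(set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)                # term frequency in this doc
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue               # no exact match -> no credit
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = ["the court held that the regulation applies",
        "the applicant sought annulment of the decision"]
scores = bm25_scores("court regulation", docs)
```

A document sharing no query terms scores exactly zero, which is precisely why BM25 thrives on the verbatim, formulaic quotations in CJEU decisions and why dense models are needed when the wording diverges.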


Can LLMs Ground when they (Don't) Know: A Study on Direct and Loaded Political Questions

Lachenmaier, Clara, Sieker, Judith, Zarrieß, Sina

arXiv.org Artificial Intelligence

Communication among humans relies on conversational grounding, allowing interlocutors to reach mutual understanding even when they do not have perfect knowledge and must resolve discrepancies in each other's beliefs. This paper investigates how large language models (LLMs) manage common ground in cases where they (don't) possess knowledge, focusing on facts in the political domain where the risk of misinformation and grounding failure is high. We examine the ability of LLMs to answer direct knowledge questions and loaded questions that presuppose misinformation. We evaluate whether loaded questions lead LLMs to engage in active grounding and correct false user beliefs, in connection to their level of knowledge and their political bias. Our findings highlight significant challenges in LLMs' ability to engage in grounding and reject false user beliefs, raising concerns about their role in mitigating misinformation in political discourse.
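The distinction the paper draws can be operationalised by pairing each fact with a direct question and a loaded question presupposing the false variant, then checking whether the model's answer rejects the presupposition. A toy harness, with a simple keyword check and a canned stub standing in for a real LLM call (both are illustrative assumptions, not the paper's method):

```python
# Toy evaluation harness: does the response to a loaded question
# reject the false presupposition? The keyword check and the
# `ask_model` stub are illustrative stand-ins only.

REJECTION_MARKERS = ("actually", "in fact", "that is not correct",
                     "there is no", "did not")

def rejects_presupposition(response: str) -> bool:
    r = response.lower()
    return any(marker in r for marker in REJECTION_MARKERS)

def ask_model(question: str) -> str:
    # Stub standing in for a real LLM call.
    canned = {
        "Which party won the 2021 German federal election?":
            "The SPD won the 2021 German federal election.",
        "Why did the Greens win the 2021 German federal election?":
            "That is not correct: the SPD, not the Greens, won in 2021.",
    }
    return canned[question]

direct = "Which party won the 2021 German federal election?"
loaded = "Why did the Greens win the 2021 German federal election?"
grounded = rejects_presupposition(ask_model(loaded))
```

The interesting failure mode the paper probes is when a model knows the fact (answers the direct question correctly) yet still answers the loaded question as if the false premise were true.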


SynLexLM: Scaling Legal LLMs with Synthetic Data and Curriculum Learning

Upadhyay, Ojasw, Saravanakumar, Abishek, Ismail, Ayman

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are powerful but often require extensive fine-tuning and large datasets for specialized domains like law. General-purpose pre-training may not capture legal nuances, and acquiring sufficient legal data is challenging. We introduce SynLexLM, a novel approach to efficiently pre-train a legal LLM. Our method employs curriculum learning, progressing from simple to complex legal texts and queries, combined with synthetic data augmentation using models like Gemini Pro to address data scarcity. We aim to achieve improved performance on legal benchmarks (BigLaw-Bench, EUR-Lex-Sum) compared to traditional models and fine-tuned versions. Preliminary work involves generating synthetic QA pairs reflecting legal reasoning. This work aims to enhance legal document analysis and research tools, potentially democratizing access to advanced legal AI.
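Curriculum learning here means ordering training examples from simple to complex before feeding them to the trainer. A minimal sketch, using sentence length times average word length as a crude difficulty proxy (the abstract does not specify the paper's actual difficulty measure):

```python
def difficulty(text: str) -> float:
    """Crude difficulty proxy: word count times average word length.
    A stand-in for whatever measure SynLexLM actually uses."""
    words = text.split()
    return len(words) * (sum(len(w) for w in words) / len(words))

def curriculum_order(corpus):
    """Sort training texts from easiest to hardest."""
    return sorted(corpus, key=difficulty)

corpus = [
    "Notwithstanding the derogations enumerated in Article 4(2), "
    "member states shall ensure proportionality.",
    "The court agreed.",
    "The regulation applies to all providers of AI systems.",
]
ordered = curriculum_order(corpus)
```

In a real pipeline the sorted corpus would be split into stages, with later training steps drawing from progressively harder buckets; synthetic QA pairs generated per stage would follow the same easy-to-hard schedule.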


EU weighs $841bn 'rearm' Europe plan to counter possible US disengagement

Al Jazeera

European Commission (EC) President Ursula von der Leyen has proposed a five-part plan to mobilise some 800bn euros ($842bn) to beef up Europe's defence and provide "immediate" military support to Ukraine after the United States suspended aid. "A new era is upon us," the president said in a letter presenting the plan to 27 European Union (EU) leaders on Tuesday, two days before a summit aimed at cementing joint action on Ukraine and Europe's long-term security begins in Brussels. "Europe faces a clear and present danger on a scale that none of us has seen in our adult lifetime," she wrote. European leaders are under huge pressure to increase defence spending as US President Donald Trump's return to power has delivered a rude wake-up call that they cannot blindly rely on Washington. The joint borrowing would go towards building pan-European capability domains like air and missile defence, artillery systems, missiles and ammunition, drones and anti-drone systems, or to address other needs from cyberdefence to military mobility, the EC said.


The EU AI Act and the Wager on Trustworthy AI

Communications of the ACM

Artificial intelligence (AI) systems are increasingly supplementing or taking over tasks previously performed by humans. On the one hand, this relates to low-risk tasks, such as recommending books, movies, or purchases based on previous buying behavior. But it also includes crucial decision making by highly autonomous systems. Many current systems are opaque in the sense that their internal principles of operation are unknown, leading to severe safety and regulation problems. Once trained, deep-learning systems perform well, but they are subject to surprising vulnerabilities when confronted with adversarial images [9]. The decisions may be explicated after the fact, but these systems carry the risk of wrong decisions affecting the well-being of people.


The United Nations Wants to Treat AI With the Same Urgency as Climate Change

WIRED

A United Nations report released today proposes having the international body oversee the first truly global effort for monitoring and governing artificial intelligence. The report, produced by the UN secretary general's High Level Advisory Body on AI, recommends the creation of a body similar to the Intergovernmental Panel on Climate Change to gather up-to-date information on AI and its risks. The report calls for a new policy dialog on AI so that the UN's 193 members can discuss risks and agree upon actions. It further recommends that the UN take steps to empower poorer nations, especially those in the global south, to benefit from AI and contribute to its governance. These should include, it says, creating an AI fund to back projects in these nations, establishing AI standards and data-sharing systems, and creating resources such as training to help nations with AI governance.